
    Optimal set of EEG features for emotional state classification and trajectory visualization in Parkinson's disease

    In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalography (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education-level-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimuli (combined audio and visual) were used to evoke the emotions. To classify the EEG-based emotional states and visualize the changes of emotional states over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results on our EEG data set, we found that (a) the bispectrum feature is superior to the other three kinds of features, namely power spectrum, wavelet packet, and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta, and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of visualizing a patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
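    As a rough illustration of the classify-and-visualize pipeline described above (not the authors' implementation), the sketch below extracts per-band EEG power features from windowed signals and projects the feature sequence with manifold learning (Isomap), so that consecutive points trace a trajectory of state changes over time. The sampling rate, window length, and synthetic data are assumptions.

```python
# Illustrative sketch, not the paper's pipeline: band-power features followed
# by manifold learning to visualize an "emotion trajectory" over time.
import numpy as np
from scipy.signal import welch
from sklearn.manifold import Isomap

FS = 128  # sampling rate in Hz (an assumption; not stated in the abstract)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(segment):
    """Mean power per frequency band, averaged over channels.
    segment: array of shape (n_channels, n_samples)."""
    freqs, psd = welch(segment, fs=FS, nperseg=FS)
    return np.array([psd[:, (freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

# Synthetic stand-in for 60 one-second windows of 14-channel EEG.
rng = np.random.default_rng(0)
windows = rng.standard_normal((60, 14, FS))
features = np.array([band_powers(w) for w in windows])

# Reduce the feature sequence to 2D; plotting the rows in order traces the
# trajectory of state changes, mirroring the visualization idea above.
trajectory = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
print(trajectory.shape)  # (60, 2)
```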

    On the analysis of EEG power, frequency and asymmetry in Parkinson's disease during emotion processing

    Objective: While Parkinson's disease (PD) has traditionally been described as a movement disorder, there is growing evidence of disruption in emotion information processing associated with the disease. The aim of this study was to investigate whether there are specific electroencephalographic (EEG) characteristics that discriminate PD patients from normal controls during emotion information processing. Method: EEG recordings from 14 scalp sites were collected from 20 PD patients and 30 age-matched normal controls. Multimodal (audio-visual) stimuli were presented to evoke specific targeted emotional states such as happiness, sadness, fear, anger, surprise, and disgust. Absolute and relative power, frequency, and asymmetry measures derived from spectrally analyzed EEGs were subjected to repeated-measures ANOVA for group comparisons, as well as to discriminant function analysis to examine their utility as classification indices. In addition, subjective ratings were obtained for the emotional stimuli used. Results: Behaviorally, PD patients showed no impairments in emotion recognition as measured by subjective ratings. Compared with normal controls, PD patients evidenced smaller overall relative delta, theta, alpha, and beta power, and at bilateral anterior regions smaller absolute theta, alpha, and beta power and higher mean total spectrum frequency across different emotional states. Inter-hemispheric theta, alpha, and beta power asymmetry index differences were noted, with controls exhibiting greater right- than left-hemisphere activation, whereas patients exhibited reduced intra-hemispheric alpha power asymmetry bilaterally at all regions. Discriminant analysis correctly classified 95.0% of the patients and controls during emotional stimuli. Conclusion: These distributed spectral powers in different frequency bands might provide meaningful information about emotional processing in PD patients.
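    The abstract does not give the exact asymmetry formula; one common convention is the log-power difference between homologous right and left electrodes. The minimal sketch below computes alpha-band power with Welch's method and such an index for one assumed electrode pair (F3/F4); the sampling rate and data are placeholders.

```python
# Minimal sketch, assuming the conventional ln(right) - ln(left) asymmetry
# index over a homologous electrode pair; not necessarily the study's formula.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def alpha_power(x):
    """Absolute alpha-band (8-13 Hz) power of a single-channel signal."""
    f, psd = welch(x, fs=FS, nperseg=FS)
    return psd[(f >= 8) & (f < 13)].mean()

def asymmetry_index(left, right):
    """Positive values indicate greater right-hemisphere alpha power."""
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

rng = np.random.default_rng(1)
f3 = rng.standard_normal(5 * FS)  # placeholder left-frontal channel (F3)
f4 = rng.standard_normal(5 * FS)  # placeholder right-frontal channel (F4)
print(asymmetry_index(f3, f4))
```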

    A real time neurophysiological framework for general monitoring awareness of air traffic controllers

    With increasing traffic volume, the highly efficient performance of air traffic controllers (ATCos) plays an essential part in ensuring safety and managing traffic within limited manpower and resources. One way to ensure this performance is to examine situation awareness (SA). However, known SA assessment methods (such as text queries) are either subjective or inapplicable in practical scenarios, so the use of physiological signals is becoming popular. In this work, a real-time monitoring approach is proposed to assess general monitoring awareness while observing events on the radar display during air traffic control (ATC), using neurophysiological measures taken from electroencephalogram (EEG) signals along with eye-tracking metrics such as eye fixation count and duration. Seven university engineering students participated in attentive and non-attentive radar monitoring activities. The preliminary experimental results revealed that the real-time EEG data, average fixation count, and fixation duration show distinct differences in levels between attentive and non-attentive monitoring activities (individual and collective). Also, the cognitive resource required for air traffic management (ATM) monitoring is relatively high. Such measures can be used as complementary data sets to gauge and validate an ATCo's general SA. This project is supported by the Civil Aviation Authority of Singapore and Nanyang Technological University, Singapore, under their collaboration in the Air Traffic Management Research Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Civil Aviation Authority of Singapore. The authors would also like to thank all the participants for their valuable time.
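    A hedged sketch of the general fusion idea (not the paper's framework): combine EEG band-power features with fixation count and mean fixation duration, then train a simple classifier to separate attentive from non-attentive monitoring windows. All data, dimensions, and the choice of logistic regression below are placeholders.

```python
# Illustrative feature-fusion sketch with synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200  # labelled monitoring windows
eeg_band_power = rng.gamma(2.0, 1.0, size=(n, 5))  # delta..gamma per window
fixation_count = rng.poisson(12, size=(n, 1))      # fixations per window
fixation_dur = rng.gamma(2.0, 150.0, size=(n, 1))  # mean duration (ms)
X = np.hstack([eeg_band_power, fixation_count, fixation_dur])
y = rng.integers(0, 2, size=n)  # 1 = attentive, 0 = non-attentive

# With real labelled data, cross-validated accuracy indicates whether the
# combined neuro/visual features carry discriminative information.
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```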

    Assessing attentive monitoring levels in dynamic environments through visual neuro-assisted approach

    This work aims to establish a framework for measuring the various attentional levels of a human operator in a real-time animated environment through a visual neuro-assisted approach. This research is supported by the Civil Aviation Authority of Singapore and Nanyang Technological University, Singapore, under their collaboration in the Air Traffic Management Research Institute.

    Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings

    Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data sets and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated: statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and features derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may inform the development of an online feature extraction framework, thereby enabling a real-time EEG-based emotion recognition system.
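    Since FD features with a CART classifier performed best, the sketch below pairs a standard Higuchi fractal dimension implementation with a CART (decision tree) classifier. The channel count, window length, labels, and tree depth are assumptions; the abstract does not specify them.

```python
# Illustrative FD-CART sketch with a standard Higuchi FD implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1D signal."""
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = 0.0
        for m in range(k):
            idx = np.arange(m, n, k)
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk += np.abs(np.diff(x[idx])).sum() * norm / k  # curve length L_m(k)
        lengths.append(lk / k)  # average over the k offsets
    # FD is the slope of log(L(k)) versus log(1/k)
    return np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)),
                      np.log(lengths), 1)[0]

rng = np.random.default_rng(3)
# One FD value per channel for each of 100 synthetic 32-channel trials.
X = np.array([[higuchi_fd(rng.standard_normal(512)) for _ in range(32)]
              for _ in range(100)])
y = rng.integers(0, 2, size=100)  # e.g. low/high valence labels
clf = DecisionTreeClassifier(max_depth=5).fit(X, y)  # CART
print(clf.score(X, y))
```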

    Emotion Recognition from Spatio-Temporal Representation of EEG Signals via 3D-CNN with Ensemble Learning Techniques

    The recognition of emotions is one of the most challenging issues in human–computer interaction (HCI). EEG signals are widely adopted for recognizing emotions because of their ease of acquisition, mobility, and convenience. Deep neural networks (DNNs) have provided excellent results in emotion recognition studies. Most studies, however, use separate methods to extract handcrafted features, such as the Pearson correlation coefficient (PCC), principal component analysis, Higuchi fractal dimension (HFD), etc., even though DNNs are capable of generating meaningful features. Furthermore, most earlier studies largely ignored spatial information between the different channels, focusing mainly on time-domain and frequency-domain representations. This study utilizes a pre-trained 3D-CNN MobileNet model with transfer learning on a spatio-temporal representation of EEG signals to extract features for emotion recognition. In addition to fully connected layers, hybrid models were explored using other decision layers such as multilayer perceptron (MLP), k-nearest neighbors (KNN), extreme learning machine (ELM), XGBoost (XGB), random forest (RF), and support vector machine (SVM). Additionally, this study investigates the effects of post-processing (filtering) of the output labels. Extensive experiments were conducted on the SJTU Emotion EEG Dataset (SEED) (three classes) and SEED-IV (four classes), and the results obtained were comparable to the state of the art. With the conventional 3D-CNN and an ELM classifier, maximum accuracies of 89.18% and 81.60% were achieved on the SEED and SEED-IV datasets, respectively. Post-filtering improved the emotion classification performance of the hybrid 3D-CNN with ELM model to 90.85% and 83.71% on SEED and SEED-IV, respectively. Accordingly, spatio-temporal features extracted from the EEG, along with ensemble classifiers, were found to be the most effective in recognizing emotions compared to state-of-the-art methods.
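    The ELM decision layer reported as most effective is simple enough to show in full: a single hidden layer with fixed random weights and a closed-form least-squares output layer. The sketch below is a generic ELM on placeholder CNN features; the feature dimensionality and hidden size are assumptions.

```python
# Generic extreme learning machine (ELM) classifier sketch.
import numpy as np

class ELM:
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random, untrained hidden layer
        T = np.eye(n_classes)[y]           # one-hot targets
        self.beta = np.linalg.pinv(H) @ T  # closed-form least-squares output
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(4)
feats = rng.standard_normal((300, 1280))  # e.g. pooled 3D-CNN features
labels = rng.integers(0, 3, size=300)     # SEED has three emotion classes
print((ELM().fit(feats, labels).predict(feats) == labels).mean())
```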

    Investigating the Effects of Microclimate on Physiological Stress and Brain Function with Data Science and Wearables

    This paper reports a study conducted by students as an independent research project under the mentorship of a research scientist at the National Institute of Education, Singapore. The aim of the study was to explore the relationships between local environmental stressors and physiological responses from the perspective of citizen science. Starting from July 2021, data from EEG headsets were complemented by data obtained from smartwatches (namely heart rate, heart rate variability, body temperature, and stress score). Identical units of a wearable device containing environmental sensors (measuring ambient temperature, air pressure, infrared radiation, and relative humidity) were designed and worn by five adolescents over the same period. More than 100,000 data points of different types (neurological, physiological, and environmental) were eventually collected and processed through a random forest regression model and deep learning models. The results showed that the most influential microclimatic factors on the biometric indicators were noise and the concentrations of carbon dioxide and dust. Subsequently, more complex inferences were made from the Shapley-value interpretation of the regression models. These findings suggest implications for the design of living conditions with respect to the interaction of the microclimate and human health and comfort.
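    A minimal sketch of the analysis pattern described above, assuming the widely used shap package: fit a random forest regression from microclimate variables to a biometric target, then rank feature influence by mean absolute Shapley value. Variable names and the toy target below are placeholders.

```python
# Random forest regression plus Shapley-value feature ranking (toy data).
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
cols = ["noise", "co2", "dust", "temperature", "pressure", "humidity"]
X = rng.standard_normal((1000, len(cols)))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.standard_normal(1000)  # toy target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
# Mean |SHAP| per column ranks the microclimatic factors by influence.
for name, v in zip(cols, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {v:.3f}")
```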

    Time-Frequency Decomposition of Scalp Electroencephalograms Improves Deep Learning-Based Epilepsy Diagnosis

    Epilepsy diagnosis based on interictal epileptiform discharges (IEDs) in scalp electroencephalograms (EEGs) is laborious and often subjective. Therefore, it is necessary to build an effective IED detector and an automatic method to classify IED-free versus IED EEGs. In this study, we evaluate features that may provide reliable IED detection and EEG classification. Specifically, we investigate IED detectors based on convolutional neural networks (ConvNets) with different input features (temporal, spectral, and wavelet features). We explore different ConvNet architectures and types, including 1D (one-dimensional) ConvNets, 2D (two-dimensional) ConvNets, and noise injection at various layers. We evaluate the EEG classification performance on five independent datasets. The 1D ConvNet with the preprocessed full-frequency EEG signal and frequency bands (delta, theta, alpha, beta) with Gaussian additive noise at the output layer achieved the best IED detection results, with a false detection rate of 0.23/min at 90% sensitivity. The EEG classification system obtained a mean Leave-One-Institution-Out (LOIO) cross-validation (CV) balanced accuracy (BAC) of 78.1% (area under the curve (AUC) of 0.839) and a Leave-One-Subject-Out (LOSO) CV BAC of 79.5% (AUC of 0.856). Since the proposed classification system only takes a few seconds to analyze a 30-min routine EEG, it may help in reducing the human effort required for epilepsy diagnosis.
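    For illustration, the sketch below assembles a small Keras 1D ConvNet of the kind described: the input stacks the full-band signal with band-filtered copies as channels, and Gaussian additive noise is injected near the output. Layer sizes, window length, and the exact noise placement are assumptions, not the paper's architecture.

```python
# Illustrative 1D ConvNet for IED vs. non-IED window classification.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 256   # samples per analyzed EEG window (assumed)
N_IN = 5       # full-band signal + 4 band-filtered copies, stacked as channels

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_IN)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.GaussianNoise(0.1),              # noise injection near the output
    layers.Dense(1, activation="sigmoid"),  # IED vs. non-IED window
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```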

    Improving automated diagnosis of epilepsy from EEGs beyond IEDs

    Objective. Clinical diagnosis of epilepsy relies partially on identifying interictal epileptiform discharges (IEDs) in scalp electroencephalograms (EEGs). This process is expert-biased and tedious, and it can delay the diagnosis procedure. Beyond automatically detecting IEDs, there are far fewer studies on automated methods to differentiate epileptic EEGs (potentially without IEDs) from normal EEGs. In addition, the diagnostic yield of a single EEG tends to be low. Consequently, there is a strong need for automated systems for EEG interpretation. Traditionally, epilepsy diagnosis relies heavily on IEDs. However, since not all epileptic EEGs exhibit IEDs, it is essential to explore IED-independent EEG measures for epilepsy diagnosis. The main objective is to develop an automated system for detecting epileptic EEGs, with or without IEDs. In order to detect epileptic EEGs without IEDs, it is crucial to include EEG features in the algorithm that are not directly related to IEDs. Approach. In this study, we explore the background characteristics of interictal EEG for automated and more reliable diagnosis of epilepsy. Specifically, we investigate features based on univariate temporal measures (UTMs), spectral, wavelet, Stockwell, connectivity, and graph metrics of EEGs, in addition to patient-related information (age and vigilance state). The evaluation is performed on a sizeable cohort of routine scalp EEGs (685 epileptic EEGs and 1229 normal EEGs) from five centers across Singapore, the USA, and India. Main results. In comparison with the current literature, we obtained an improved Leave-One-Subject-Out (LOSO) cross-validation (CV) area under the curve (AUC) of 0.871 (balanced accuracy (BAC) of 80.9%) with a combination of three features (IED rate, and Daubechies and Morlet wavelets) for the classification of EEGs with IEDs vs. normal EEGs. The IED-independent UTM features achieved a LOSO CV AUC of 0.809 (BAC of 74.4%). The inclusion of IED-independent features also helps to improve the EEG-level classification of epileptic EEGs with and without IEDs vs. normal EEGs, achieving an AUC of 0.822 (BAC of 77.6%) compared to 0.688 (BAC of 59.6%) for classification based only on the IED rate. Specifically, the addition of IED-independent features improved the BAC by 21% in detecting epileptic EEGs that do not contain IEDs. Significance. These results pave the way towards automated detection of epilepsy. We are among the first to analyze epileptic EEGs without IEDs, thereby opening up an underexplored option in epilepsy diagnosis.
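    Of the feature families listed, the Daubechies wavelet piece is easy to sketch with PyWavelets: decompose an EEG channel and take the log-energy of each sub-band as a compact background descriptor. The wavelet order, decomposition level, and sampling rate below are arbitrary assumptions, not the study's settings.

```python
# One IED-independent feature family: Daubechies wavelet sub-band energies.
import numpy as np
import pywt  # pip install PyWavelets

def daubechies_energies(x, wavelet="db4", level=5):
    """Log-energy of each wavelet sub-band of a 1D EEG signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

rng = np.random.default_rng(6)
eeg_channel = rng.standard_normal(30 * 256)  # 30 s at an assumed 256 Hz
print(daubechies_energies(eeg_channel))      # level + 1 = 6 sub-band features
```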